    Generation of model-based safety arguments from automatically allocated safety integrity levels

    To certify safety-critical systems, assurance arguments linking evidence of safety to appropriate requirements must be constructed. However, modern safety-critical systems feature increasing complexity and integration, which render manual approaches impractical to apply. This thesis addresses this problem by introducing a model-based method, with an exemplary application in the aerospace domain. Previous work has partially addressed this problem for slightly different applications, including verification-based, COTS, product-line and process-based assurance. Each of these approaches is applicable to a specialised case and does not deliver a solution applicable to a generic system in a top-down process. This thesis argues that such a solution is feasible and can be achieved based on the automatic allocation of safety requirements onto a system’s architecture. This automatic allocation is a recent development which combines model-based safety analysis and optimisation techniques. The proposed approach emphasises the use of model-based safety analysis, such as HiP-HOPS, to maximise the benefits towards the system development lifecycle. The thesis investigates the background and earlier work regarding construction of safety arguments, safety requirements allocation and optimisation. A method for addressing the problem of optimal safety requirements allocation is first introduced, using the Tabu Search optimisation metaheuristic. The method delivers satisfactory results that are further exploited for construction of safety arguments. Using the produced requirements allocation, an instantiation algorithm is applied to a generic, standards-compliant safety argument pattern to automatically construct an argument establishing a claim that a system’s safety requirements have been met. This argument is hierarchically decomposed and shows how system and subsystem safety requirements are satisfied by architectures and analyses at low levels of decomposition. Evaluation on two abstract case studies demonstrates the feasibility and scalability of the method and indicates good performance of the proposed algorithms. Limitations and potential areas of further investigation are identified.
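
    As a rough illustration of the allocation step described in this abstract, the sketch below drives a Tabu Search over integer SIL assignments. It assumes a simplified rule (the SILs of the elements in each cut set must sum to at least the target SIL) and a hypothetical per-SIL cost table and element names; it is not the thesis's actual algorithm or cost model.

```python
# Illustrative Tabu Search for SIL allocation (a sketch under simplifying
# assumptions, not the thesis's exact algorithm or cost model).
COST = [0, 10, 20, 40, 80]          # hypothetical cost of SIL 0..4 per element
ELEMENTS = ["sensor", "controller", "actuator", "monitor"]
# Each entry: (elements of one minimal cut set, target SIL). Simplifying rule:
# the SILs of the cut-set elements must sum to at least the target.
CUT_SETS = [(["sensor", "controller"], 4), (["actuator", "monitor"], 3)]

def cost(alloc):
    return sum(COST[alloc[e]] for e in ELEMENTS)

def feasible(alloc):
    return all(sum(alloc[e] for e in elems) >= target for elems, target in CUT_SETS)

def neighbours(alloc):
    # Neighbouring allocations differ by one SIL step on one element.
    for e in ELEMENTS:
        for delta in (-1, +1):
            s = alloc[e] + delta
            if 0 <= s <= 4:
                n = dict(alloc)
                n[e] = s
                yield (e, s), n

def tabu_search(iters=200, tenure=5):
    current = {e: 4 for e in ELEMENTS}          # start from a safe but costly allocation
    best, best_cost = dict(current), cost(current)
    tabu = {}                                    # (element, value) -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for (e, new_sil), n in neighbours(current):
            if not feasible(n):
                continue
            c = cost(n)
            if tabu.get((e, new_sil), -1) >= it and c >= best_cost:
                continue                         # tabu move, and no aspiration (not better than best)
            candidates.append((c, e, new_sil, n))
        if not candidates:
            break
        c, e, new_sil, n = min(candidates, key=lambda t: t[0])
        tabu[(e, current[e])] = it + tenure      # forbid moving this element back for a while
        current = n
        if c < best_cost:
            best, best_cost = dict(n), c
    return best, best_cost

if __name__ == "__main__":
    print(tabu_search())
```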

    Automating allocation of development assurance levels: An extension to HiP-HOPS

    Controlling the allocation of safety requirements across a system's architecture from the early stages of development is an aspiration embodied in numerous major safety standards. Manual approaches to applying this process in practice are ineffective due to the scale and complexity of modern electronic systems. In the work presented here, we aim to address this issue by presenting an extension to the dependability analysis and optimisation tool, HiP-HOPS, which allows automatic allocation of such requirements. We focus on aerospace requirements expressed as Development Assurance Levels (DALs); however, the proposed process and algorithms can be applied to other common forms of expression of safety requirements such as Safety Integrity Levels. We illustrate application to a model of an aircraft wheel braking system.
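
    To convey why manual allocation quickly becomes impractical, the brute-force sketch below enumerates every DAL assignment for a toy wheel-braking-style model; even four items already yield 5^4 = 625 candidates. The decomposition rule (cut-set DALs summing to the target) is a common simplification, and the item names, costs and targets are invented, so this illustrates the problem rather than the paper's HiP-HOPS extension.

```python
# Minimal brute-force DAL allocation for a toy braking-style model.
# Assumptions (not the paper's exact rules): DALs E..A map to integers 0..4, and
# a failure condition's DAL is met if the DALs of the items in each of its minimal
# cut sets sum to at least the target. Item names and costs are hypothetical.
from itertools import product

DAL_COST = [0, 5, 15, 40, 100]                 # hypothetical cost of DAL E..A
ITEMS = ["brake_pedal", "bscu_ch1", "bscu_ch2", "hydraulic_valve"]
# (minimal cut set, target DAL): loss of braking assumed catastrophic -> DAL A (=4)
FAILURE_CONDITIONS = [(["bscu_ch1", "bscu_ch2"], 4), (["brake_pedal", "hydraulic_valve"], 3)]

def meets_targets(alloc):
    return all(sum(alloc[i] for i in cut) >= dal for cut, dal in FAILURE_CONDITIONS)

best, best_cost, checked = None, float("inf"), 0
for dals in product(range(5), repeat=len(ITEMS)):
    checked += 1
    alloc = dict(zip(ITEMS, dals))
    if meets_targets(alloc):
        c = sum(DAL_COST[d] for d in dals)
        if c < best_cost:
            best, best_cost = alloc, c

print(f"checked {checked} allocations")        # already 5**4 = 625 for four items
print(best, best_cost)
```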

    A synthesis of logic and bio-inspired techniques in the design of dependable systems

    Much of the development of model-based design and dependability analysis in the design of dependable systems, including software intensive systems, can be attributed to advances in formal logic and their application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines these two techniques, schematically founded on the two pillars of formal logic and biology, from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle, covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules.
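
    The multi-objective optimisation mentioned at the end of this abstract ultimately reduces to selecting non-dominated design candidates. Below is a minimal sketch of such a Pareto filter, over invented (unavailability, cost) values rather than any real HiP-HOPS output.

```python
# Illustrative Pareto-front filter for candidate architectures evaluated on two
# objectives (unavailability, cost); candidate names and values are made up.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other[1], c[1]) for other in candidates if other is not c)]

candidates = [
    ("single channel",  (1e-3, 10)),
    ("dual redundant",  (1e-5, 25)),
    ("dual + monitor",  (1e-6, 32)),
    ("triple modular",  (1e-6, 45)),   # dominated: same unavailability, higher cost
]
for name, objectives in pareto_front(candidates):
    print(name, objectives)
```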

    Exploring the impact of different cost heuristics in the allocation of safety integrity levels

    Contemporary safety standards prescribe processes in which system safety requirements, captured early and expressed in the form of Safety Integrity Levels (SILs), are iteratively allocated to architectural elements. Different SILs reflect different requirements stringencies and consequently different development costs. Therefore, the allocation of safety requirements is not a simple problem of applying an allocation "algebra" as treated by most standards; it is a complex optimisation problem, one of finding a strategy that minimises cost whilst meeting safety requirements. One difficulty is the lack of a commonly agreed heuristic for how costs increase between SILs. In this paper, we define this important problem; then we take the example of an automotive system and, using an automated approach, show that different cost heuristics lead to different optimal SIL allocations. Without automation it would have been impossible to explore the vast space of allocations and to discuss the subtleties involved in this problem.
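
    The paper's central observation can be reproduced on a toy example: when a single requirement is split across two elements under the common "SILs must sum to the target" simplification, the cheapest split changes with the assumed per-SIL cost table. Both cost tables below are invented for illustration.

```python
# Worked toy example: the optimal split of a safety requirement between two
# elements depends on the assumed per-SIL cost heuristic. SILs are integers 0..4
# and the decomposition rule is the simplification that the two elements' SILs
# must sum to the original SIL; both cost tables are invented.
TARGET = 4                                     # e.g. the most stringent level
HEURISTICS = {
    "steeply increasing": [0, 10, 20, 40, 80],   # each SIL step costs more than the last
    "high entry cost":    [0, 50, 60, 70, 80],   # any non-zero SIL brings large process overhead
}

splits = [(a, TARGET - a) for a in range(TARGET + 1)]   # (0,4), (1,3), ..., (4,0)
for name, cost in HEURISTICS.items():
    best = min(splits, key=lambda s: cost[s[0]] + cost[s[1]])
    total = cost[best[0]] + cost[best[1]]
    print(f"{name:20s} -> best split {best}, total cost {total}")
```

    With the first table it is cheapest to spread stringency evenly across both elements; with the second it is cheapest to concentrate it in one element, which is exactly the kind of divergence the paper explores at system scale.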

    Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations

    Machine learning is currently undergoing an explosion in capability, popularity, and sophistication. However, one of the major barriers to widespread acceptance of machine learning (ML) is trustworthiness: most ML models operate as black boxes, their inner workings opaque and mysterious, and it can be difficult to trust their conclusions without understanding how those conclusions are reached. Explainability is therefore a key aspect of improving trustworthiness: the ability to better understand, interpret, and anticipate the behaviour of ML models. To this end, we propose SMILE, a new method that builds on previous approaches by making use of statistical distance measures to improve explainability while remaining applicable to a wide range of input data domains.
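
    The abstract does not detail the mechanism, so the sketch below is only one plausible reading of the idea: a LIME-style local surrogate in which perturbation weights come from a statistical distance (scipy's Wasserstein distance) rather than a conventional Euclidean kernel. The black-box model and data are synthetic, and this should not be taken as the SMILE paper's exact formulation.

```python
# LIME-style local explanation where the usual kernel over Euclidean distance is
# replaced by one over a statistical distance (Wasserstein distance between the
# feature-value distributions of the instance and each perturbation).
# Illustrative reading only; the black-box model and widths are made up.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in opaque model: a nonlinear function of the inputs.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

def explain(instance, n_samples=500, width=0.75):
    # 1. Perturb the instance locally.
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    y = black_box(X)
    # 2. Weight each perturbation by a statistical distance to the instance.
    d = np.array([wasserstein_distance(instance, x) for x in X])
    w = np.exp(-(d ** 2) / width ** 2)
    # 3. Fit a weighted interpretable surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
    return surrogate.coef_

print(explain(np.array([0.3, -1.2, 2.0])))
```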

    Keep your Distance: Determining Sampling and Distance Thresholds in Machine Learning Monitoring

    Machine Learning (ML) has provided promising results in recent years across different applications and domains. However, in many cases, qualities such as reliability or even safety need to be ensured. To this end, one important aspect is to determine whether or not ML components are deployed in situations that are appropriate for their application scope. For components whose environments are open and variable, for instance those found in autonomous vehicles, it is therefore important to monitor their operational situation to determine its distance from the ML components' trained scope. If that distance is deemed too great, the application may choose to consider the ML component outcome unreliable and switch to alternatives, e.g. using human operator input instead. SafeML is a model-agnostic approach for performing such monitoring, using distance measures based on statistical testing of the training and operational datasets. Limitations in setting SafeML up properly include the lack of a systematic approach for determining, for a given application, how many operational samples are needed to yield reliable distance information, as well as how to determine an appropriate distance threshold. In this work, we address these limitations by providing a practical approach, and demonstrate its use on a well-known traffic sign recognition problem and on an example using the CARLA open-source automotive simulator.
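
    One way to make the sampling and threshold questions concrete is to bootstrap the distance between same-sized subsamples of the training data and treat a high percentile of those distances as the alarm threshold for that batch size. The sketch below does this with the Kolmogorov-Smirnov statistic on a single synthetic feature; it illustrates the general idea rather than the paper's exact procedure.

```python
# Illustrative threshold and sample-size determination for distribution-shift
# monitoring: learn the "in-distribution" baseline distance at a given batch size
# from training-data subsamples, then compare operational batches against it.
# Synthetic data and the KS statistic stand in for the real features and measures.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=20_000)          # stand-in training feature values

def threshold_for(batch_size, n_boot=500, percentile=99):
    stats = []
    for _ in range(n_boot):
        a = rng.choice(train, batch_size, replace=False)
        b = rng.choice(train, batch_size, replace=False)
        stats.append(ks_2samp(a, b).statistic)
    return np.percentile(stats, percentile)

for batch_size in (50, 200, 1000):
    thr = threshold_for(batch_size)
    in_dist = ks_2samp(rng.choice(train, batch_size, replace=False),
                       rng.normal(0.0, 1.0, batch_size)).statistic
    shifted = ks_2samp(rng.choice(train, batch_size, replace=False),
                       rng.normal(0.8, 1.0, batch_size)).statistic
    print(f"batch={batch_size:5d} threshold={thr:.3f} "
          f"in-dist={in_dist:.3f} shifted={shifted:.3f}")
```

    Small batches produce noisy distances and therefore high thresholds, which is exactly why the choice of operational sample size matters.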

    SafeDrones: Real-Time Reliability Evaluation of UAVs using Executable Digital Dependable Identities

    The use of Unmanned Aerial Vehicles (UAVs) offers many advantages across a variety of applications. However, safety assurance is a key barrier to widespread usage, especially given the unpredictable operational and environmental factors experienced by UAVs, which are hard to capture solely at design-time. This paper proposes a new reliability modeling approach called SafeDrones to help address this issue by enabling runtime reliability and risk assessment of UAVs. It is a prototype instantiation of the Executable Digital Dependable Identity (EDDI) concept, which aims to create a model-based solution for real-time, data-driven dependability assurance for multi-robot systems. By providing real-time reliability estimates, SafeDrones allows UAVs to update their missions accordingly in an adaptive manner.
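
    A minimal sketch of the kind of runtime evaluation described here: component failure rates scaled by live operational factors and combined under a series-system, exponential reliability model. The component names, rates and temperature stress rule are invented and are not SafeDrones' actual models.

```python
# Toy runtime reliability estimate for a multirotor: base failure rates are scaled
# by a made-up temperature stress factor, and mission reliability over the remaining
# flight time is the product of component reliabilities (series assumption).
import math

BASE_FAILURE_RATE = {        # failures per hour (hypothetical values)
    "motor_1": 1e-4, "motor_2": 1e-4, "motor_3": 1e-4, "motor_4": 1e-4,
    "battery": 5e-5, "flight_controller": 1e-5, "gps": 2e-5,
}

def stress_factor(component, telemetry):
    # Toy rule: motor failure rate doubles for every 20 C above 60 C.
    if component.startswith("motor"):
        temp = telemetry.get("motor_temp_c", 60.0)
        return 2.0 ** max(0.0, (temp - 60.0) / 20.0)
    return 1.0

def mission_reliability(remaining_hours, telemetry):
    r = 1.0
    for comp, lam in BASE_FAILURE_RATE.items():
        lam_eff = lam * stress_factor(comp, telemetry)
        r *= math.exp(-lam_eff * remaining_hours)   # exponential reliability model
    return r

# A UAV could re-evaluate this each control cycle and adapt its mission if the
# estimate drops below an acceptance threshold.
print(mission_reliability(0.5, {"motor_temp_c": 85.0}))
```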

    Securing a Dependability Improvement Mechanism for Cyber Physical Systems

    The open and cooperative nature of Cyber-Physical Systems (CPS) poses a significant new challenge in assuring dependability. A European funded project named DEIS addresses this important and unsolved challenge by developing technologies that facilitate the efficient synthesis of components and systems based on their dependability information. The key innovation that is the aim of DEIS is the corresponding concept of a Digital Dependability Identity (DDI). A DDI contains all the information that uniquely describes the dependability characteristics of a CPS or CPS component. In this paper we present an overview of the DDI, and provide the protocol for ensuring the security of the DDI while it is in transit and at rest. Additionally, we provide confidentiality, integrity and availability validation of the protocol.
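
    As a generic illustration of the confidentiality and integrity properties discussed above (availability is a system-level concern and is not shown), a serialised DDI can be protected at rest and in transit with an authenticated symmetric scheme such as Fernet from the Python cryptography library. This is a simplified sketch, not the DEIS project's actual protocol, and key distribution is assumed to be handled elsewhere.

```python
# Illustrative protection of a serialised DDI using Fernet (AES-CBC plus HMAC),
# covering confidentiality and integrity; not the DEIS protocol, and key handling
# is deliberately simplified. The DDI contents are hypothetical.
import json
from cryptography.fernet import Fernet, InvalidToken

ddi = {"component": "brake_controller", "failure_rate_per_h": 1e-6, "asil": "D"}

key = Fernet.generate_key()          # in practice, distributed via a key-management scheme
f = Fernet(key)

token = f.encrypt(json.dumps(ddi).encode())   # store or transmit this opaque token

try:
    received = json.loads(f.decrypt(token).decode())   # decryption also verifies integrity
    print("DDI accepted:", received)
except InvalidToken:
    print("DDI rejected: tampered or wrong key")
```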